MGEM Mixed Pixel Lab

Welcome to the Malcolm Knapp Research Forest! During your time in the MGEM program, you will be exposed to a wide range of remote sensing and GIS technologies, data sets, and workflows that equip you to answer questions about our environment. Remote sensing data sets can typically be characterized by three core elements: temporal resolution, spectral resolution, and spatial resolution. Temporal resolution refers to the revisit time of a sensor, i.e. how long a satellite-based sensor takes to complete full coverage of the Earth. Spectral resolution refers to the unique portions of the electromagnetic spectrum captured by a sensor. And finally, the spatial resolution of a sensor refers to the dimensions of a pixel captured by that sensor. Depending on satellite orbit and instrument design, the spatial resolution (pixel size) of remote sensing datasets varies from coarse (i.e. 50 km SMOS pixels) to fine scales (i.e. 3 m PlanetScope). To learn more about the different satellite platforms and their purposes, orbits, and owners, I recommend browsing the SatelliteXplorer!
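To get a feel for how much ground a single pixel covers at these resolutions, here is a minimal sketch (the pixel sizes are the nominal values mentioned above; the dictionary and variable names are just for illustration):

```python
# Ground area covered by one pixel at different nominal spatial resolutions.
pixel_sizes_m = {
    "SMOS": 50_000,      # ~50 km pixels
    "Landsat": 30,
    "Sentinel-2": 10,
    "PlanetScope": 3,
}

for platform, size in pixel_sizes_m.items():
    area_ha = (size * size) / 10_000  # square meters -> hectares
    print(f"{platform}: {size} m pixel covers {area_ha:,} ha")
```

A 30 m Landsat pixel covers roughly a hundred times the ground area of a 3 m PlanetScope pixel, which is why the same patch of forest can look homogeneous at one resolution and mixed at another.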
Introduction
Image Classification & Mixed Pixel Problems
Landscape-level analysis of satellite data often requires that pixels be classified using comprehensive categories or descriptors. For example, quantifying changes in forest cover over time requires identifying which pixels represent forest, and which do not. Images can be classified into only a few classes (e.g. forest or non-forest), or many classes representing more complex landscapes (e.g. deciduous, broadleaf, mixed-wood, treed wetland). Depending on the spatial resolution of the data set you are working with, the land cover composition within a pixel may comprise more than one of these classes. This is commonly referred to as the ‘Mixed Pixel Problem’, and introduces uncertainty in classification tasks. In this exercise, you will simulate the spatial resolutions of three popular satellite remote sensing platforms: PlanetScope, Sentinel-2, and Landsat.
By mapping out “pixels” on the landscape at MKRF, you will investigate the effect of the mixed pixel problem on your ability to classify the landscape into meaningful categories. The main goals for the day are a) to experience what the spatial resolution of some global satellite data sets look like on the ground, and b) to understand the limitations of representing complex land cover through the classification of satellite data pixels.
Pixel Mapping
The first part of this exercise involves mapping out your own ‘pixels’ in the MKRF research forest and observing the landscape features that each of these pixels contains. For this exercise, you need to form into 6 groups, each of which will be provided with a compass and transect tape. You will also need to assign one note-taker to record your observations in the field. When you are ready:
- Locate your first study site on the interactive map in Part 2. In the field, these sites will be marked with a cone. You can also enable your live GPS location on the map in case you are not sure if you are in the right place. Once you arrive at the plot, record the plot center using the provided GPS.
- Map out a 3-meter PlanetScope pixel around the cone, using the compass and transect tape provided. Orient your imaginary grid towards true north, and mark the corners of the pixel with your group members. (HINT: the magnetic declination at Loon Lake is +16°, so you will have to adjust your compass accordingly. If you are using a compass app on your phone, make sure that true north is enabled.)
- Decide if the pixel is mixed or homogeneous and note down your response.
- As a group, discuss and record the features visible on the landscape.
- Based on the recorded features, come up with a land cover class to assign for each platform, in each pixel. This step is somewhat subjective; you can disagree with your group members!
- Repeat these steps for a 10-meter Sentinel-2 pixel and a 30-meter Landsat pixel.
- Once you have finished the steps above at your site, locate your next plot and repeat.
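The declination adjustment in the steps above is simple arithmetic: a true bearing is the magnetic compass reading plus the local declination. A short sketch, assuming the +16° declination given in the hint (the function name is ours, not part of any compass app):

```python
# Convert a magnetic compass bearing to a true bearing.
# Declination of +16 deg (east) is from the lab hint for Loon Lake;
# check a current magnetic model before relying on it in the field.
DECLINATION_DEG = 16.0

def magnetic_to_true(magnetic_bearing_deg: float) -> float:
    """Return the true bearing for a magnetic compass reading."""
    return (magnetic_bearing_deg + DECLINATION_DEG) % 360

# To walk due (true) north, sight 344 deg on the magnetic compass:
print(magnetic_to_true(344.0))  # -> 0.0
```

In other words, to lay out a grid oriented to true north with a magnetic compass, subtract 16° from the bearing you would otherwise sight.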

Discussion Questions
Add your answers to the table
- Were there any sites dominated by one particular land cover class for all three resolutions / platforms?
- Imagine each pixel in the year 2000. Look for clues about the site’s history. Do you think that you would have assigned it to a different land cover class 20 years ago?
- Is the value of a pixel determined equally by reflectance from the center and reflectance from the corners? In other words, does the sensor “see” the entire area represented by a pixel?
Once you have finished filling out the table at the end of the lab, click the ‘pdf’ button to export your table.
Imagery Comparison
Now that you have taken some detailed field notes for each of the sites, you will compare your observations to the Landsat, Sentinel-2, and PlanetScope satellite imagery of MKRF, visualized here using the Red (R), Green (G), and Blue (B) bands as a ‘True Colour’ composite. The map slider also allows you to visualize the Normalized Difference Vegetation Index (NDVI), an indicator of vegetation vigor. As you compare, consider the spectral values of the different platform pixels corresponding to each site.
- Locate each site on the images of the study area and identify the pixel in the imagery corresponding to the site.
- Describe the pixel in the datasheet. What is its color?
- Look at the NDVI images and estimate the value for the pixel at each site.
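As a reminder of what the NDVI values you are estimating represent: NDVI is computed per pixel as (NIR − Red) / (NIR + Red) and ranges from −1 to +1. A minimal sketch (the reflectance values below are illustrative, not measurements from MKRF):

```python
# NDVI = (NIR - Red) / (NIR + Red); values range from -1 to +1.
def ndvi(nir: float, red: float) -> float:
    """Compute NDVI from near-infrared and red reflectance."""
    if nir + red == 0:
        return 0.0  # guard against division by zero over no-data pixels
    return (nir - red) / (nir + red)

# Healthy forest reflects strongly in NIR and absorbs red light,
# giving a high NDVI; these reflectances are hypothetical examples.
print(round(ndvi(nir=0.50, red=0.05), 2))  # high NDVI: vigorous vegetation
print(round(ndvi(nir=0.20, red=0.18), 2))  # near-zero NDVI: bare ground
```

Dense vegetation pushes NDVI toward +1, while bare soil sits near zero and water is often negative, which should help you sanity-check the values you read off the map.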
Discussion Questions
Why do you think that the range of NDVI values differs so much between sensors?
What are the brightest and darkest areas in each image?